Deep Reinforcement Learning-based Quadcopter Controller: A Practical Approach and Experiments

Do, Truong-Dong, Mung, Nguyen Xuan, Hong, Sung Kyung

arXiv.org Artificial Intelligence

Quadcopters have been studied for decades thanks to their maneuverability and capability of operating in a variety of circumstances. However, quadcopters suffer from dynamical nonlinearity, actuator saturation, and sensor noise, which make it challenging and time-consuming to obtain accurate dynamic models and achieve satisfactory control performance. Fortunately, deep reinforcement learning has shown significant potential in system modelling and control of autonomous multirotor aerial vehicles, with recent advancements in deployment, performance enhancement, and generalization. In this paper, an end-to-end deep reinforcement learning-based controller for quadcopters is proposed that is safe for real-world implementation, data-efficient, and free of human gain adjustments. First, a novel actor-critic-based architecture is designed to map the robot states directly to the motor outputs. Then, a quadcopter dynamics-based simulator is devised to facilitate the training of the controller policy. Finally, the trained policy is deployed on a real Crazyflie nano quadrotor platform without any additional fine-tuning. Experimental results show that the quadcopter exhibits satisfactory performance while tracking a complicated trajectory, which demonstrates the effectiveness and feasibility of the proposed method and its capability to bridge the simulation-to-reality gap.
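The key idea of an end-to-end actor network, mapping the state vector directly to motor commands, can be sketched as a small feed-forward policy. This is a minimal illustration, not the paper's actual architecture: the 12-dimensional state layout, layer sizes, and random (untrained) weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 12   # assumed layout: position (3), velocity (3), attitude (3), angular rates (3)
HIDDEN = 64      # illustrative hidden width, not the paper's
N_MOTORS = 4

# Untrained weights stand in for a policy learned by actor-critic training.
W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (N_MOTORS, HIDDEN))
b2 = np.zeros(N_MOTORS)

def actor(state):
    """Map a state vector to four normalized motor outputs in [-1, 1]."""
    h = np.tanh(W1 @ state + b1)
    return np.tanh(W2 @ h + b2)

state = rng.normal(size=STATE_DIM)
motors = actor(state)
```

The tanh output keeps each motor command bounded, which mirrors how actuator saturation is typically respected when a learned policy drives real hardware.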


Overlaying classifiers: a practical approach for optimal ranking

Neural Information Processing Systems

ROC curves are one of the most widely used displays for evaluating the performance of scoring functions. In this paper, we propose a statistical method for directly optimizing the ROC curve. The target is known to be the regression function up to an increasing transformation, so the problem boils down to recovering the level sets of the latter. We propose to use classifiers obtained by empirical risk minimization of a weighted classification error and then to construct a scoring rule by overlaying these classifiers. We show the consistency and rate of convergence of this procedure to the optimal ROC curve in terms of the supremum norm and, as a byproduct of the analysis, derive an empirical estimate of the optimal ROC curve.
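The overlaying idea can be illustrated with a toy 1-D sketch, not the paper's estimator: fit K binary classifiers, each by empirical risk minimization of a weighted classification error over simple threshold rules, then overlay them into a piecewise-constant scoring rule s(x) = Σ_k C_k(x). The monotone regression function, the threshold hypothesis class, and the grid of K = 9 weight levels are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 1, n)
eta = x                                       # assumed regression function P(Y=1|X=x), monotone
y = (rng.uniform(0, 1, n) < eta).astype(int)

def fit_threshold(x, y, u):
    """ERM over rules C_t(x) = 1{x > t} for the u-weighted classification error."""
    candidates = np.linspace(0, 1, 101)
    def weighted_error(t):
        pred = (x > t).astype(int)
        false_pos = np.mean((pred == 1) & (y == 0))
        false_neg = np.mean((pred == 0) & (y == 1))
        return u * false_pos + (1 - u) * false_neg
    return min(candidates, key=weighted_error)

levels = np.linspace(0.1, 0.9, 9)             # K = 9 weight levels
thresholds = [fit_threshold(x, y, u) for u in levels]

def score(x_new):
    """Overlay the K classifiers: the score counts how many vote 'positive'."""
    return sum(x_new > t for t in thresholds)
```

Varying the weight u traces out classifiers for different level sets of the regression function, so stacking their decisions yields a scoring rule whose ROC curve approximates the optimal one.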


An Approximate Inference Approach to Temporal Optimization in Optimal Control

Neural Information Processing Systems

Algorithms based on iterative local approximations present a practical approach to optimal control in robotic systems. However, they generally require the temporal parameters (e.g., the movement duration or the time point of reaching an intermediate goal) to be specified a priori. Here, we present a methodology that is capable of jointly optimising the temporal parameters in addition to the control command profiles. The presented approach is based on a Bayesian canonical time formulation of the optimal control problem, with the temporal mapping from canonical to real time parametrised by an additional control variable. An approximate EM algorithm is derived that efficiently optimises both the movement duration and the control commands, offering, for the first time, a practical approach to tackling generic via-point problems in a systematic way under the optimal control framework. The proposed approach is evaluated on simulations of a redundant robotic plant.


Recommender Systems: An Applied Approach using Deep Learning - CouponED

#artificialintelligence

Have you ever thought about how YouTube adjusts your feed to match your favorite content? Why does Netflix recommend your favorite TV shows? Have you ever wanted to build a customized deep learning-based recommender system for yourself? If yes, then this is the course you are looking for. You might have searched through many relevant courses, but this course is different!


A Practical Approach to Timeseries Forecasting using Python

#artificialintelligence

Have you ever wondered how weather predictions are made? Have you ever tried to estimate the global population in 2050? What if someone told you that you could predict the expected life of our universe just sitting at your laptop at home? You might have searched through many relevant courses, but this course is different! This course is a complete package for beginners to learn time series, data analysis, and forecasting methods from scratch.


Recommender Systems with Machine Learning

#artificialintelligence

This course is a complete package for beginners to learn the basics of recommender systems, their applications, and how to build one from scratch using machine learning with Python. Every module pairs a complete practical approach with brief coverage of the necessary theoretical concepts. At the end of every module, we assign you a quiz; the solutions to the quizzes are available in the next video. We will start with the theoretical concepts of recommender systems, and after providing you with this basic knowledge, you will learn about the important taxonomies of recommender systems, which are their basic building blocks.


Active Learning: A Practical Approach to Improve Your Data Labeling Experience

#artificialintelligence

Okay, let's talk about the one thing that doesn't get much attention in the data science realm: labeling your data. It's a painful process, which may be why it is disregarded in tutorials you find on the internet or bootcamps you join. However, it's one of the most crucial components in the data pipeline; you know, garbage in, garbage out. A bad label leads to a bad model and a bad production practice. The data-centric approach to machine learning has recently turned this idea into a whole new research playground.


Getting Deep Learning working in the wild: A Data-Centric Course - KDnuggets

#artificialintelligence

Have you been excited by recent high-profile deep learning successes, but are not sure how to keep deep learning models working for your project in practice? We've developed a distilled set of materials on data-centric deep learning approaches, which are often among the most impactful tools for getting deep learning models working on new tasks. Data-centric deep learning is a relatively new area and a broad term. For us, being data-centric means taking a different perspective on deep learning, one centered around building and maintaining the datasets that define and evaluate deep learning models. The real-world applications and successes of deep learning systems are growing by the day.


Practical AI with Python and Reinforcement Learning

#artificialintelligence

This course is in an "early bird" release, and we're still updating and adding content to it; please keep in mind before enrolling that the course is not yet complete. "The future is already here – it's just not very evenly distributed." Have you ever wondered how Artificial Intelligence actually works? Do you want to harness the power of neural networks and reinforcement learning to create intelligent agents that can solve tasks with human-level complexity? This is the ultimate online course for learning how to use Python to harness the power of neural networks and create artificially intelligent agents! This course focuses on a practical approach that puts you in the driver's seat to actually build and create intelligent agents, instead of just showing you small toy examples like many other online courses.


How to train your deep learning models in a distributed fashion.

#artificialintelligence

Deep learning algorithms are well suited to large data sets, and training deep networks requires large computation power. With GPUs/TPUs easily available on a pay-per-use basis or for free (like Google Colab), it is possible today to train a large neural network in the cloud, say ResNet-152 (152 layers), on the ImageNet database, which has around 14 million images. But is a single multi-core, GPU-enabled machine enough to train huge models? Technically yes, but it might take weeks to train the model. So how do we reduce the training time?
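The usual answer is synchronous data parallelism: each worker computes gradients on its own shard of the batch, the gradients are averaged (the all-reduce step), and every replica applies the same update. A minimal sketch of the mechanics, using a plain linear regressor and simulated workers rather than a real multi-GPU setup:

```python
import numpy as np

rng = np.random.default_rng(42)
n_workers, shard_size, dim = 4, 32, 8
w = np.zeros(dim)                 # model parameters, replicated on every worker
w_true = rng.normal(size=dim)     # ground truth used to generate data

def grad(w, X, y):
    """Mean-squared-error gradient computed on one worker's shard."""
    return 2 * X.T @ (X @ w - y) / len(y)

lr = 0.1
for step in range(200):
    # one global batch, split evenly across the workers
    X = rng.normal(size=(n_workers * shard_size, dim))
    y = X @ w_true
    shards = np.array_split(np.arange(len(y)), n_workers)
    # each worker computes its local gradient in parallel ...
    local_grads = [grad(w, X[idx], y[idx]) for idx in shards]
    # ... then an all-reduce averages them, keeping all replicas in sync
    g = np.mean(local_grads, axis=0)
    w -= lr * g
```

Because the averaged gradient equals the gradient over the full global batch, the result matches single-machine training on the same data while the per-worker compute shrinks, which is exactly why multiple machines cut wall-clock training time.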